Results 1 - 20 of 11,053
1.
J Biomed Semantics; 15(1): 3, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654304

ABSTRACT

BACKGROUND: Systematic reviews of Randomized Controlled Trials (RCTs) are an important part of the evidence-based medicine paradigm. However, the creation of such systematic reviews by clinical experts is costly as well as time-consuming, and results can quickly become outdated after publication. Most RCTs are structured according to the Patient, Intervention, Comparison, Outcomes (PICO) framework, and many approaches exist that aim to extract PICO elements automatically. The automatic extraction of PICO information from RCTs has the potential to significantly speed up the creation of systematic reviews and thereby benefit the field of evidence-based medicine. RESULTS: Previous work has addressed the extraction of PICO elements as the task of identifying relevant text spans or sentences, but without populating a structured representation of a trial. In contrast, in this work we treat PICO elements as structured templates with slots, to do justice to the complex nature of the information they represent. We present two different approaches to extract this structured information from the abstracts of RCTs. The first is an extractive approach based on our previous work, extended to capture full document representations and by a clustering step that infers the number of instances of each template type. The second is a generative approach based on a seq2seq model that encodes the abstract describing the RCT and uses a decoder to infer a structured representation of the trial, including its arms, treatments, endpoints and outcomes. Both approaches are evaluated with different base models on an existing manually annotated dataset comprising 211 clinical trial abstracts on type 2 diabetes and glaucoma. For both diseases, the extractive approach (with flan-t5-base) reached the best F1 score, i.e. 0.547 (±0.006) for type 2 diabetes and 0.636 (±0.006) for glaucoma. Generally, the F1 scores were higher for glaucoma than for type 2 diabetes, and the standard deviation was higher for the generative approach. CONCLUSION: In our experiments, both approaches show promising performance in extracting structured PICO information from RCTs, especially considering that most related work focuses on the far easier task of predicting less structured objects. In our experimental results, the extractive approach performs best in both cases, although its lead is greater for glaucoma than for type 2 diabetes. For future work, it remains to be investigated how the base model size affects the performance of the two approaches in comparison. Although the extractive approach currently leaves more room for direct improvements, the generative approach might benefit from larger models.
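
A minimal sketch of the generative approach's read-out step, assuming a seq2seq checkpoint fine-tuned for PICO extraction (the checkpoint name and the linearized slot format below are hypothetical; the paper's actual serialization scheme may differ):

```python
# Sketch: decode a structured trial representation from an RCT abstract.
# "flan-t5-base-pico" is a hypothetical fine-tuned checkpoint, and the
# "Arm: ... | Endpoint: ..." output style is an assumed linearization.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "flan-t5-base-pico"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

abstract = (
    "Patients with type 2 diabetes were randomized to metformin or placebo; "
    "the primary endpoint was HbA1c change at 24 weeks."
)
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=256)

# e.g. "Arm: metformin | Arm: placebo | Endpoint: HbA1c change | ..."
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```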


Subjects
Abstracting and Indexing, Randomized Controlled Trials as Topic, Humans, Natural Language Processing, Information Storage and Retrieval/methods
2.
Res Synth Methods; 15(3): 372-383, 2024 May.
Article in English | MEDLINE | ID: mdl-38185812

ABSTRACT

Literature screening is the process of identifying all relevant records from a pool of candidate records in systematic review, meta-analysis, and other research synthesis tasks. This process is time-consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while screening only a proportion of high-priority candidate records. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years; however, methods with reliable performance are lacking. Our objective is to develop a screening prioritization algorithm with performance reliable enough for practical use, for example, one that guarantees an 80% chance of identifying at least 80% of the relevant records. Based on a target-based method proposed by Cormack and Grossman, we propose a screening prioritization algorithm that uses sampling with replacement. The algorithm is a wrapper that can work with any existing screening prioritization method to guarantee this performance. We prove, using probability theory, that the algorithm guarantees the performance, and we run numerical experiments to test it in practice. The experimental results show that the algorithm achieves reliable performance under different circumstances. The proposed screening prioritization algorithm can therefore be used reliably in real-world research synthesis tasks.
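
To make the wrapper concrete, the following is a toy simulation of a target-based stopping rule in the spirit described above: sample target records with replacement from the relevant set, screen in whatever order the underlying prioritization method produces, and stop once every target has been seen. All names and parameters are illustrative; the paper's exact procedure and its probabilistic guarantee follow from its own proofs.

```python
import random

def recall_at_stop(ranking, relevant, n_targets=10, seed=0):
    """Screen records in 'ranking' order until all targets (sampled with
    replacement from the relevant set) have been seen; return the recall
    achieved at that stopping point."""
    rng = random.Random(seed)
    targets = {rng.choice(list(relevant)) for _ in range(n_targets)}
    found, pending = set(), set(targets)
    for record in ranking:
        if record in relevant:
            found.add(record)
        pending.discard(record)
        if not pending:  # every target located: stop screening
            break
    return len(found) / len(relevant)

# 1,000 candidates, 50 relevant; a random permutation stands in for the
# ranking produced by an underlying prioritization method.
ranking = list(range(1000))
random.Random(1).shuffle(ranking)
print(f"recall at stop: {recall_at_stop(ranking, set(range(50))):.2f}")
```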


Subjects
Algorithms, Humans, Reproducibility of Results, Meta-Analysis as Topic, Probability, Information Storage and Retrieval/methods, Systematic Reviews as Topic, Automation, Statistical Models
3.
Res Synth Methods; 15(3): 441-449, 2024 May.
Article in English | MEDLINE | ID: mdl-38098285

ABSTRACT

The literature search underpins data collection for all systematic reviews (SRs). The SR reporting guideline PRISMA and its extensions aim to facilitate research transparency and reproducibility, and ultimately to improve the quality of research, by instructing authors to provide specific research materials and data upon publication of the manuscript. Search strategies are one data item explicitly required by PRISMA and by the critical appraisal tool AMSTAR2. Yet some authors use search availability statements implying that the search strategies are available upon request instead of providing the strategies up front. We sought out reviews with search availability statements, characterized them, and requested the search strategies from authors via email. Over half of the included reviews cited PRISMA, but less than a third included any search strategies. After requesting the strategies via email as instructed, we received replies from 46% of authors, and eventually received at least one search strategy from 36% of authors. Requesting search strategies via email thus has a low chance of success: ask and you might receive, but you probably will not. SRs that do not make search strategies available are low quality at best according to AMSTAR2; journal editors can and should enforce the requirement for authors to include their search strategies alongside their SR manuscripts.


Subjects
Systematic Reviews as Topic, Humans, Information Storage and Retrieval/methods, Reproducibility of Results, Guidelines as Topic, Research Design, Information Dissemination, Review Literature as Topic, Electronic Mail, Bibliographic Databases
5.
J Am Med Inform Assoc; 30(4): 718-725, 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36688534

ABSTRACT

OBJECTIVE: To convert the Medical Information Mart for Intensive Care (MIMIC)-IV database into Health Level 7 Fast Healthcare Interoperability Resources (FHIR), generate and publish an openly available demo of the resources, and create a FHIR Implementation Guide to support and clarify the usage of MIMIC-IV on FHIR. MATERIALS AND METHODS: FHIR profiles and the terminology system of MIMIC-IV were modeled from the base FHIR R4 resources. Data and terminology were reorganized from the relational structure into FHIR according to the profiles, and the generated resources were validated for conformance with them. Finally, the FHIR resources were published as newline-delimited JSON (NDJSON) files and the profiles were packaged into an implementation guide. RESULTS: The modeling of MIMIC-IV in FHIR resulted in 25 profiles, 2 extensions, 35 ValueSets, and 34 CodeSystems. An implementation guide encompassing the FHIR modeling can be accessed at mimic.mit.edu/fhir/mimic. The generated demo dataset contained 100 patients and over 915,000 resources; the full dataset contained 315,000 patients covering approximately 5,840,000 resources. The final datasets in NDJSON format are accessible on PhysioNet. DISCUSSION: Our work highlights the challenges and benefits of generating a real-world FHIR store. The challenges arise from terminology mapping and profile modeling decisions; the benefits come from the extensively validated, openly accessible data created as a result of the modeling work. CONCLUSION: The newly created MIMIC-IV on FHIR provides one of the first accessible deidentified critical care FHIR datasets. The extensive real-world data found in MIMIC-IV on FHIR will be invaluable for research and the development of healthcare applications.
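
Because the published resources are newline-delimited JSON, each line is one self-contained FHIR resource and large exports can be streamed rather than loaded whole. A minimal consumer sketch (the file name is illustrative of a MIMIC-IV on FHIR export downloaded from PhysioNet):

```python
import json

# Each line of an NDJSON export holds one complete FHIR resource.
with open("Patient.ndjson", encoding="utf-8") as f:
    for line in f:
        resource = json.loads(line)
        # Every FHIR resource carries its type and logical id.
        print(resource["resourceType"], resource["id"])
```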


Subjects
Health Level Seven, Information Dissemination, Information Storage and Retrieval, Patients, Information Storage and Retrieval/methods, Information Storage and Retrieval/standards, Humans, Datasets as Topic, Reproducibility of Results, Electronic Health Records, Information Dissemination/methods
6.
Rev. cuba. inform. méd; 14(2): e480, Jul-Dec 2022. graphs
Article in English | LILACS, CUMED | ID: biblio-1408545

ABSTRACT

Anemia is the most common blood disorder in the world, affecting millions of people yearly. It has multiple causes and jeopardizes development, growth and learning. Current tools provide non-hematologist doctors with just numbers. The present paper proposes a web application intended to provide doctors at the Celia Sánchez Manduley Hospital in Manzanillo with a tool for determining the morphologic type of anemia, producing a list of possible causes, and storing patient data in a database for future research. As an information system, the website constitutes a powerful tool for decision-making, in particular for the diagnostic process, providing more detailed information on anemia as well as fostering information management at the hospital. The application uses HTML, CSS, JavaScript and PHP as languages; Apache as the web server; CodeIgniter as the PHP framework; MariaDB as the database management system; and Visual Studio Code as the development environment.
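
The morphologic classification that the application automates is conventionally driven by the mean corpuscular volume (MCV); a minimal sketch of that rule, using the commonly cited 80/100 fL cutoffs (the function is ours for illustration, not the application's PHP code):

```python
def morphologic_type(mcv_fl: float) -> str:
    """Classify anemia by mean corpuscular volume (MCV, fL) using the
    commonly cited cutoffs of 80 and 100 fL."""
    if mcv_fl < 80:
        return "microcytic"   # e.g. iron deficiency, thalassemia
    if mcv_fl <= 100:
        return "normocytic"   # e.g. anemia of chronic disease, blood loss
    return "macrocytic"       # e.g. B12 or folate deficiency

print(morphologic_type(72))  # -> microcytic
```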


Subjects
Humans, Male, Female, Computer Communication Networks, Medical Informatics Applications, Programming Languages, Information Storage and Retrieval/methods, Anemia/diagnosis, Anemia/epidemiology
7.
Rev. cuba. inform. méd; 14(2): e520, Jul-Dec 2022. graphs
Article in Spanish | LILACS, CUMED | ID: biblio-1408543

ABSTRACT

For neuroscientists, it is a challenge to keep track of the data and metadata generated in each investigation and to extract all the relevant information accurately; this is crucial for interpreting results and a minimum requirement for researchers to build their investigations on previous findings. As much information as possible should be kept from the start, even if it seems irrelevant, and data should be recorded and stored with their metadata clearly and concisely. A preliminary analysis of the specialized literature revealed an absence of detailed research on how to incorporate data and metadata management into clinical brain research: organizing data and metadata comprehensively in digital repositories, collecting and entering them with attention to completeness, and taking advantage of that collection during data analysis. This research aims to characterize neuroscience data and metadata conceptually and technically in order to facilitate the development of software solutions for their management and processing. Different bibliographic sources were consulted, as well as databases and repositories such as PubMed, SciELO, Nature and ResearchGate. The analysis of the collection, organization, processing and storage of neuroscience data and metadata for each data acquisition technique (EEG, iEEG, MEG, PET), as well as their link to the Brain Imaging Data Structure (BIDS), yielded a general characterization of how to manage and process the information they contain.
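
BIDS, mentioned above, standardizes both the directory/file naming and the JSON "sidecar" metadata that travel with each recording. A minimal sketch of writing a BIDS-style EEG sidecar (TaskName, SamplingFrequency, EEGChannelCount and PowerLineFrequency are standard BIDS fields; the values and the subject/task labels are illustrative):

```python
import json
from pathlib import Path

# BIDS-style layout: sub-<label>/eeg/sub-<label>_task-<label>_eeg.json
sidecar = Path("sub-01/eeg/sub-01_task-rest_eeg.json")
sidecar.parent.mkdir(parents=True, exist_ok=True)

metadata = {
    "TaskName": "rest",
    "SamplingFrequency": 500,   # Hz
    "EEGChannelCount": 64,
    "PowerLineFrequency": 60,   # Hz, mains interference to note/filter
}
sidecar.write_text(json.dumps(metadata, indent=2))
```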


Subjects
Humans, Male, Female, Medical Informatics, Medical Informatics Applications, Programming Languages, Information Storage and Retrieval/methods, Metadata, Neurosciences
10.
Arch. Soc. Esp. Oftalmol; 97(9): 510-513, Sep 2022. illus
Article in Spanish | IBECS | ID: ibc-209105

ABSTRACT

Objective: To digitise our old archive and evaluate the efficiency of this task in both medical and economic terms. Material and methods: All slides and negatives (8,254) archived in our clinic were collected and digitised with a 5-megapixel slide scanner. The images were taken from 1972 to 1999. The quality and utility of the images were weighed against the cost of the task (2,100 euros), with all the work done by the same ophthalmologist. Results: Of the identifiable patients, 62% had already died. Only 1.5% of the slides were archived for use: 70 images for teaching reasons and 60 for medical reasons, the latter incorporated into the patients' histories. About 210 hours were spent on scanning, identifying, checking and uploading images. 84% corresponded to retinal pathology, 4% to glaucomatous pathology, 3% to anterior segment pathology and the remaining 9% to teaching material. The quality of most images is good and, in some cases, they were crucial for the correct diagnosis of the pathology. If only medical reasons are taken into account, the number of images worth incorporating into the clinical record is very low when working with archives older than 50 years. Conclusions: Although the percentage of scanned images kept was low, we consider the task efficient given its low cost. Archives more than 50 years old should be evaluated before scanning because of their low utility.
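
The efficiency claim is easy to reproduce from the reported figures alone (a quick arithmetic check, using only numbers from the abstract):

```python
slides, cost_eur, hours = 8254, 2100, 210

print(f"cost per slide: {cost_eur / slides:.2f} EUR")    # ~0.25 EUR/slide
print(f"throughput:     {slides / hours:.1f} slides/h")  # ~39.3 slides/hour
```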


Subjects
Humans, Information Storage and Retrieval/methods, Engraving and Engravings, Ophthalmology, Information Storage and Retrieval/economics, Efficacy
11.
Sci Rep; 12(1): 13878, 2022 Aug 16.
Article in English | MEDLINE | ID: mdl-35974033

ABSTRACT

Compound mixtures represent an alternative, additional approach to DNA and synthetic sequence-defined macromolecules in the field of non-conventional molecular data storage, which may be useful depending on the target application. Here, we report a fast and efficient method for information storage in molecular mixtures through the direct use of commercially available chemicals, so that no synthetic steps need to be performed. As a proof of principle, a binary coding language is used to encode words in ASCII or the black and white pixels of a bitmap. In this way, we stored a 25 × 25-pixel QR code (625 bits) and a picture of the same size. Decoding of the written information is achieved via spectroscopic (1H NMR) or chromatographic (gas chromatography) analysis. In addition, for faster and automated read-out of the data, we developed decoding software, which also orders the data sets according to an internal "ordering" standard. Molecular keys and anticounterfeiting are possible areas of application for information-containing compound mixtures.
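
The encoding principle is presence or absence of each compound at a fixed bit position. A toy sketch of writing and reading one ASCII character as an eight-compound mixture (the compound names are placeholders, not the chemicals used in the study, where read-out comes from NMR or GC peak detection rather than a set lookup):

```python
# One compound per bit position: '1' = compound added, '0' = left out.
compounds = ["C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8"]

def encode(char: str) -> set[str]:
    bits = format(ord(char), "08b")
    return {c for c, b in zip(compounds, bits) if b == "1"}

def decode(mixture: set[str]) -> str:
    bits = "".join("1" if c in mixture else "0" for c in compounds)
    return chr(int(bits, 2))

mix = encode("A")          # ASCII 65 -> '01000001' -> {'C2', 'C8'}
assert decode(mix) == "A"
```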


Subjects
Information Storage and Retrieval, Software, DNA/genetics, Datasets as Topic/statistics & numerical data, Information Storage and Retrieval/methods, Information Storage and Retrieval/standards, Magnetic Resonance Spectroscopy
12.
Nature; 608(7921): 217-225, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35896746

ABSTRACT

Biological processes depend on the differential expression of genes over time, but methods to make physical recordings of these processes are limited. Here we report a molecular system for making time-ordered recordings of transcriptional events into living genomes. We do this through engineered RNA barcodes, based on prokaryotic retrons, that are reverse transcribed into DNA and integrated into the genome using the CRISPR-Cas system. The unidirectional integration of barcodes by CRISPR integrases enables reconstruction of transcriptional event timing based on a physical record through simple, logical rules rather than relying on pretrained classifiers or post hoc inferential methods. For disambiguation in the field, we will refer to this system as a Retro-Cascorder.
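
The "simple, logical rules" exploit the unidirectionality of integration: within any one barcode array, an earlier event's barcode precedes a later one's. A toy sketch of ordering two signals by majority precedence across sequenced arrays (our illustration only; the paper defines its own reconstruction rules):

```python
from collections import Counter

# Each sequenced array lists barcodes in integration order, which mirrors
# the temporal order of the recorded transcriptional events.
arrays = [["A", "B"], ["A", "B"], ["B", "A"], ["A", "B"]]

votes = Counter()
for arr in arrays:
    for i, first in enumerate(arr):
        for later in arr[i + 1:]:
            votes[(first, later)] += 1

# Majority precedence: A-before-B in 3 of 4 arrays -> infer A came first.
print(votes[("A", "B")], "arrays place A before B")
print(votes[("B", "A")], "arrays place B before A")
```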


Subjects
CRISPR-Cas Systems, DNA, Gene Editing, Gene Expression, Information Storage and Retrieval, RNA, Reverse Transcription, CRISPR-Cas Systems/genetics, DNA/biosynthesis, DNA/genetics, Gene Editing/methods, Genome/genetics, Information Storage and Retrieval/methods, Integrases/metabolism, Prokaryotic Cells/metabolism, RNA/genetics, Time Factors
13.
Stud Health Technol Inform; 294: 705-706, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612183

ABSTRACT

Information found in social media may help to set up infoveillance and track epidemics, identify high-risk behaviours, or assess trends or feelings about a subject or event. We developed a dashboard that enables novice users to easily and autonomously extract and analyze data from Twitter. Eleven users tested the dashboard and considered the tool highly usable and useful. They were able to conduct the research they wanted and appreciated being able to use the tool without having to program.


Subjects
Computer Graphics, Information Storage and Retrieval, Social Media, Humans, Information Storage and Retrieval/methods
14.
Cancer Biomark; 33(2): 185-198, 2022.
Article in English | MEDLINE | ID: mdl-35213361

ABSTRACT

BACKGROUND: With the use of artificial intelligence and machine learning techniques for biomedical informatics, security and privacy concerns over the data and subject identities have also become an important issue and essential research topic. Without intentional safeguards, machine learning models may find patterns and features that improve task performance but are associated with private personal information. OBJECTIVE: The privacy vulnerability of deep learning models for information extraction from medical textual content needs to be quantified, since the models are exposed to private health information and personally identifiable information. The objective of this study is to quantify the privacy vulnerability of deep learning models for natural language processing and to explore a proper way of securing patients' information to mitigate confidentiality breaches. METHODS: The target model is a multitask convolutional neural network for information extraction from cancer pathology reports, where the data for training the model come from multiple state population-based cancer registries. This study proposes the following schemes for collecting vocabularies from the cancer pathology reports: (a) words appearing in multiple registries, and (b) words with higher mutual information. We performed membership inference attacks on the models in high-performance computing environments. RESULTS: The comparison outcomes suggest that the proposed vocabulary selection methods resulted in lower privacy vulnerability while maintaining the same level of clinical task performance.
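
A sketch of the second vocabulary scheme: score each word's mutual information with the prediction label and keep only the top-scoring terms (toy corpus and labels; the study's actual preprocessing, labels, and thresholds are not reproduced here):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

# Toy stand-ins for pathology report snippets and a clinical label.
reports = [
    "invasive ductal carcinoma of the breast",
    "adenocarcinoma of the lung, poorly differentiated",
    "benign fibroadenoma of the breast",
    "no evidence of malignancy in lung biopsy",
]
labels = [1, 1, 0, 0]  # 1 = malignant, 0 = benign (illustrative)

vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(reports)
mi = mutual_info_classif(X, labels, discrete_features=True, random_state=0)

# Restrict the model's vocabulary to the highest-MI words.
vocab = vectorizer.get_feature_names_out()
top_words = [w for _, w in sorted(zip(mi, vocab), reverse=True)[:5]]
print(top_words)
```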


Subjects
Confidentiality, Deep Learning, Information Storage and Retrieval/methods, Natural Language Processing, Neoplasms/epidemiology, Artificial Intelligence, Deep Learning/standards, Humans, Neoplasms/pathology, Registries
15.
Proc Natl Acad Sci U S A; 119(4), 2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35042803

ABSTRACT

Green plants play a fundamental role in ecosystems, human health, and agriculture. As de novo genomes are being generated for all known eukaryotic species as advocated by the Earth BioGenome Project, increasing genomic information on green land plants is essential. However, setting standards for the generation and storage of the complex set of genomes that characterize the green lineage of life is a major challenge for plant scientists. Such standards will need to accommodate the immense variation in green plant genome size, transposable element content, and structural complexity while enabling research into the molecular and evolutionary processes that have resulted in this enormous genomic variation. Here we provide an overview and assessment of the current state of knowledge of green plant genomes. To date, fewer than 300 complete chromosome-scale genome assemblies, representing fewer than 900 species, have been generated across the estimated 450,000 to 500,000 species in the green plant clade. These genomes range in size from 12 Mb to 27.6 Gb and are biased toward agricultural crops, with large branches of the green tree of life untouched by genomic-scale sequencing. Locating suitable tissue samples of most species of plants, especially those taxa from extreme environments, remains one of the biggest hurdles to increasing our genomic inventory. Furthermore, the annotation of plant genomes is at present undergoing intensive improvement. It is our hope that this fresh overview will help in the development of genomic quality standards for a cohesive and meaningful synthesis of green plant genomes as we scale up for the future.


Subjects
Base Sequence/genetics, Genomics/trends, Viridiplantae/genetics, Biodiversity, Biological Evolution, DNA Transposable Elements/genetics, Ecology, Ecosystem, Embryophyta/genetics, Molecular Evolution, Genome, Plant Genome/genetics, Genomics/methods, Information Dissemination/methods, Information Storage and Retrieval/methods, Phylogeny, Plants/genetics
16.
Platelets; 33(3): 416-424, 2022 Apr 03.
Article in English | MEDLINE | ID: mdl-34115551

ABSTRACT

Platelet function assays and global haemostasis assays are essential in diagnosing bleeding tendencies, with light transmission aggregometry (LTA) as the gold standard. Multiple electrode aggregometry (Multiplate), the platelet function assay (PFA) and rotational thromboelastometry (ROTEM) are mostly used as whole-blood screening tests. Currently, patients have to travel to specialized laboratories to undergo these tests, since specific expertise is required. Pre-analytical variables, like storage time and temperature during transport, are still considered the most vulnerable part of the process and may lead to discrepancies in the test results. We aim to give a first impression of the stability of blood samples from healthy volunteers during storage, investigating the effect of storage time (1, 3, 6 and 24 hours) and temperature (4°C, room temperature and 37°C) on Multiplate, PFA, ROTEM and LTA test results. Our data indicated that whole blood for the PFA can be stored for 3 hours at room temperature, and whole blood for the Multiplate and ROTEM for 6 hours. For LTA, platelet-rich plasma (PRP) and whole blood were stable for up to 3 hours at 4°C or room temperature and 6 hours at room temperature, respectively.


Subjects
Bioassay/methods, Hemostasis/physiology, Information Storage and Retrieval/methods, Platelet Function Tests/methods, Adult, Female, Humans, Male, Temperature, Young Adult
17.
J Trauma Acute Care Surg; 92(1): 82-87, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-34284466

ABSTRACT

BACKGROUND: Current data on the epidemiology of firearm injury in the United States are incomplete. Common sources include hospital, law enforcement, consumer, and public health databases, but each has limitations that exclude injury subgroups. By integrating hospital (inpatient and outpatient) and law enforcement databases, we hypothesized that a more accurate depiction of the totality of firearm injury in our region could be achieved. METHODS: We constructed a collaborative firearm injury database consisting of all patients admitted as inpatients to the regional level 1 trauma hospital (inpatient registry), patients treated and released from the emergency department (ED), and subjects encountering local law enforcement as a result of firearm injury in Jefferson County, Kentucky. Injuries recorded from January 1, 2016, to December 31, 2020, were analyzed. Outcomes, demographics, and injury detection rates from the individual databases were compared with those of the combined collaborative database using χ2 testing. RESULTS: The inpatient registry (n = 1,441) and ED database (n = 1,109) were combined, resulting in 2,550 incidents in the hospital database. The law enforcement database consisted of 2,665 incidents, 2,008 of them in common with the hospital database and 657 unique. The merged collaborative database consisted of 3,207 incidents. In comparison with the collaborative database, the inpatient, total hospital (inpatient and ED), and law enforcement databases failed to include 55%, 20%, and 17% of all injuries, respectively. The hospital captured nearly 94% of survivors but less than 40% of nonsurvivors; law enforcement captured 93% of nonsurvivors but missed 20% of survivors. Mortality (11-26%) and injury incidence were markedly different across the databases. DISCUSSION: Trauma registry or law enforcement databases alone do not accurately reflect the epidemiology of firearm injury and may misrepresent areas in need of greater injury prevention efforts. LEVEL OF EVIDENCE: Epidemiological, level IV.
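
The integration step is, at its core, an outer join on an incident identifier, after which the overlap and each source's unique contribution fall out of the merge indicator. A sketch with hypothetical column names (in practice, record linkage across hospital and police data requires more than a shared key):

```python
import pandas as pd

# Hypothetical incident-level extracts; "incident_id" is an assumed
# shared key for illustration.
hospital = pd.DataFrame({"incident_id": [1, 2, 3, 4], "survived": [1, 1, 0, 1]})
police = pd.DataFrame({"incident_id": [3, 4, 5, 6], "fatal": [0, 1, 1, 0]})

merged = hospital.merge(police, on="incident_id", how="outer", indicator=True)
# left_only = hospital-only, right_only = police-only, both = captured twice.
print(merged["_merge"].value_counts())
```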


Subjects
Factual Databases, Firearms/legislation & jurisprudence, Hospital Information Systems/statistics & numerical data, Law Enforcement/methods, Public Health, Registries, Gunshot Wounds, Adult, Data Accuracy, Factual Databases/standards, Factual Databases/statistics & numerical data, Hospital Emergency Service/statistics & numerical data, Female, Humans, Incidence, Information Storage and Retrieval/methods, Information Storage and Retrieval/statistics & numerical data, Male, Health Care Needs Assessment, Public Health/methods, Public Health/standards, Public Health/statistics & numerical data, Registries/standards, Registries/statistics & numerical data, United States/epidemiology, Gunshot Wounds/diagnosis, Gunshot Wounds/epidemiology, Gunshot Wounds/prevention & control
18.
Bol. pediatr; 62(259): 3-11, 2022. illus
Article in Spanish | IBECS | ID: ibc-202819

ABSTRACT

Managing and classifying the biomedical literature gathered during healthcare, teaching, and research activities is a major challenge for health professionals. Reference managers are software tools that help scholars store and organize documents and bibliographic references. They also make it easier to manage citations and build the bibliography when writing a scholarly manuscript. The basic features and evolution of the two most complete free reference managers currently available, Zotero and Mendeley, are reviewed. Finally, the use of Zotero is described in more detail, including the new features of the Zotero 6 upgrade. The new free reference managers can meet the needs of most healthcare professionals for comprehensive management of their document and reference collections.


Subjects
Information Storage and Retrieval/methods, Medical Informatics, Software, Bibliographic Databases, Internet Access
19.
Med. paliat; 29(1): 45-52, 2022. illus, tables
Article in Spanish | IBECS | ID: ibc-206761

ABSTRACT

Developing a search strategy in health sciences databases that achieves a balanced result between sensitivity and specificity is a real challenge. It is essential to have a good understanding of the different types of resources in order to select the most appropriate ones in each case, as well as of the operators, vocabularies, filters and other interface options implemented in each of them. In this article, through a practical, game-like exercise, the elements of the PRISMA-S checklist for reporting a rapid review are described: eligibility criteria for studies in PICO format, mandatory information sources, search strategies in PubMed, and the peer review form used to describe the methodology of the entire process.
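
A PICO-shaped PubMed strategy can also be executed programmatically, which makes the reported search reproducible verbatim. A sketch using Biopython's Entrez wrapper (the query combines one MeSH term with free-text title/abstract synonyms per concept; the query content and the email address are illustrative):

```python
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requests a contact address

query = (
    '("Palliative Care"[Mesh] OR palliat*[tiab]) '
    'AND ("Pain Management"[Mesh] OR analgesi*[tiab])'
)
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
result = Entrez.read(handle)
handle.close()
print(result["Count"], result["IdList"][:5])
```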


Subjects
Humans, Digital Literacy, Information Storage and Retrieval/methods, Information Literacy, Health Information Systems/organization & administration, Review Literature as Topic, PubMed, Systematic Reviews as Topic, Information Dissemination, Science Popularization Publications, Scientific Communication and Diffusion
20.
PLoS One; 16(12): e0261130, 2021.
Article in English | MEDLINE | ID: mdl-34905557

ABSTRACT

Natural history collection data available digitally on the web have so far only made limited use of the potential of semantic links among themselves and with cross-disciplinary resources. In a pilot study, botanical collections of the Consortium of European Taxonomic Facilities (CETAF) have therefore begun to semantically annotate their collection data, starting with data on people, and to link them via a central index system. As a result, it is now possible to query data on collectors across different collections and automatically link them to a variety of external resources. The system is being continuously developed and is already in production use in an international collection portal.
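
At its core, the person-linking step is reconciliation of collector name strings against a central index of agent records. A toy sketch of one fuzzy-matching pass (names, identifiers, and the similarity cutoff are invented for illustration; the CETAF pipeline is considerably more elaborate):

```python
from difflib import get_close_matches

# Toy central index: preferred collector name -> agent identifier.
index = {
    "Humboldt, Alexander von": "agent:0001",
    "Bonpland, Aime": "agent:0002",
    "Sellow, Friedrich": "agent:0003",
}

def reconcile(collector: str) -> str | None:
    """Return the identifier of the closest index entry, if close enough."""
    match = get_close_matches(collector, index.keys(), n=1, cutoff=0.6)
    return index[match[0]] if match else None

print(reconcile("Humboldt, A. von"))  # -> agent:0001
```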


Subjects
Data Collection, Factual Databases, Information Storage and Retrieval/methods, Botany, Computational Biology/methods, Humans